# Information Retrieval

## Chonky Modernbert Large 1

**Author:** mirth · **License:** MIT · **Tags:** Sequence Labeling, Transformers, English · **Downloads:** 54 · **Likes:** 2

Chonky is a Transformer model that intelligently splits text into meaningful semantic chunks, making it suitable for RAG systems.
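
As a rough illustration of how such a chunker can be driven, the sketch below treats the model as a plain Hugging Face token-classification pipeline and cuts the input at the character offsets of predicted boundary spans. The model id and the assumption that predicted spans mark chunk boundaries come from this listing, not official Chonky documentation, so verify both before relying on them.

```python
# A rough sketch, not official Chonky usage: run the model as a
# token-classification pipeline and split the text at predicted boundaries.
# The model id and the meaning of the predicted spans are assumptions.
from transformers import pipeline

splitter = pipeline(
    "token-classification",
    model="mirth/chonky_modernbert_large_1",  # id assumed from this listing
    aggregation_strategy="simple",
)

text = (
    "Transformers changed NLP. Attention lets models weigh context. "
    "In other news, retrieval systems index documents for search."
)

chunks, start = [], 0
for ent in splitter(text):  # each aggregated span marks a predicted split point
    chunks.append(text[start:ent["end"]].strip())
    start = ent["end"]
chunks.append(text[start:].strip())
print(chunks)
```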
## Set Encoder Novelty Base

**Author:** webis · **License:** Apache-2.0 · **Tags:** Text Embedding · **Downloads:** 14 · **Likes:** 0

Set-Encoder is a cross-encoder architecture designed for efficient, permutation-invariant passage re-ranking, particularly suited to novelty-aware re-ranking tasks.
## Lightblue Reranker 0.5 Bincont Filt Gguf

**Author:** RichardErkhov · **Tags:** Text Embedding · **Downloads:** 2,054 · **Likes:** 0

A text-ranking (reranker) model in GGUF format, used to sort passages by their relevance to a query.
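
Because the weights are distributed as GGUF, one plausible way to use them is behind a llama.cpp server started with reranking enabled (e.g. `llama-server -m model.gguf --reranking`). The flag, endpoint path, and response fields below follow llama.cpp's Jina-style rerank API as best I understand it; treat all of them as assumptions to check against current llama.cpp documentation.

```python
# A hypothetical sketch: query a llama.cpp server's rerank endpoint.
# Endpoint path and payload/response shapes are assumptions to verify.
import requests

resp = requests.post(
    "http://localhost:8080/v1/rerank",
    json={
        "query": "what is dense retrieval?",
        "documents": [
            "Dense retrieval encodes queries and passages into vectors.",
            "GGUF is a file format for quantized models.",
        ],
    },
)
for item in resp.json()["results"]:  # assumed: each result has index + score
    print(item["index"], item["relevance_score"])
```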
## Rank1 0.5b

**Author:** jhu-clsp · **License:** MIT · **Tags:** Large Language Model, Transformers, English · **Downloads:** 21 · **Likes:** 0

rank1 is an information-retrieval reranking model trained from Qwen2.5-0.5B that improves the accuracy of relevance judgments by generating an explicit reasoning chain before scoring.
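
Since rank1 is a generative reranker, scoring is a text-generation call rather than a single forward pass: the model writes out its reasoning and then a relevance judgment. The sketch below is a minimal illustration assuming the checkpoint loads as an ordinary causal LM; the model id and prompt format are guesses from this listing, not the official usage.

```python
# A minimal sketch, assuming rank1 behaves like a causal LM that emits a
# reasoning chain followed by a relevance judgment. Model id and prompt
# format are assumptions from this listing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jhu-clsp/rank1-0.5b"  # assumed from the listing
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Determine whether the passage is relevant to the query.\n"
    "Query: what causes rain?\n"
    "Passage: Rain forms when water vapor condenses into droplets.\n"
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated reasoning + judgment, not the prompt.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```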
## Set Encoder Base

**Author:** webis · **License:** Apache-2.0 · **Tags:** Text Embedding, Safetensors · **Downloads:** 295 · **Likes:** 1

Set-Encoder is a cross-encoder architecture designed for efficient, permutation-invariant passage re-ranking.
## Robbert 2023 Dutch Base Cross Encoder

**Author:** NetherlandsForensicInstitute · **Tags:** Text Embedding, Transformers · **Downloads:** 118 · **Likes:** 2

A sentence-embedding model built on the transformers library that generates vector representations of sentences and supports text-ranking tasks.
## Splade V3 Doc

**Author:** naver · **Tags:** Text Embedding, Transformers, English · **Downloads:** 2,223 · **Likes:** 1

SPLADE-v3-Doc is the document version of the SPLADE model, focused on document-side inference and suited to information-retrieval scenarios.
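
Document-side SPLADE inference turns masked-language-model logits into a sparse vector over the vocabulary, conventionally by max-pooling log(1 + ReLU(logits)) across token positions. A minimal sketch follows, assuming this listing corresponds to a `naver/splade-v3-doc` checkpoint loadable as an MLM:

```python
# A minimal sketch of SPLADE-style document-side encoding.
# The model id is assumed from this listing.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-v3-doc"  # assumed from the listing
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

doc = "SPLADE expands documents into weighted vocabulary terms."
inputs = tok(doc, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# Sparse document vector: max-pool log(1 + ReLU(logits)) over tokens.
weights = torch.log1p(torch.relu(logits)).max(dim=1).values.squeeze(0)
top = torch.topk(weights, 10)
print([(tok.decode([int(i)]), round(float(w), 2))
       for i, w in zip(top.indices, top.values)])
```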
## K Finance Sentence Transformer

**Author:** ohsuz · **Tags:** Text Embedding, Transformers · **Downloads:** 160 · **Likes:** 1

A sentence-transformers embedding model that maps text into a 768-dimensional vector space, suitable for semantic search and clustering tasks.
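
Most of the sentence-transformers models in this list share the same usage pattern: encode texts into dense vectors, then rank candidates by cosine similarity. A minimal sketch (the model id is an assumption based on this entry, and any of the 768-dimensional models below would slot in the same way):

```python
# A minimal sketch of the shared sentence-transformers usage pattern:
# encode, then rank by cosine similarity. Model id assumed from the listing.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ohsuz/K-Finance-Sentence-Transformer")  # assumed id
query = "quarterly revenue growth"
docs = ["Revenue rose 12% quarter over quarter.", "The weather was pleasant."]

q_emb = model.encode(query, convert_to_tensor=True)  # shape (768,)
d_emb = model.encode(docs, convert_to_tensor=True)   # shape (2, 768)
print(util.cos_sim(q_emb, d_emb))                    # similarity scores
```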
## Gte Large Onnx

**Author:** Qdrant · **License:** Apache-2.0 · **Tags:** Text Embedding, Transformers · **Downloads:** 597 · **Likes:** 2

GTE-Large is a text embedding model ported to ONNX, suitable for text classification and similarity search tasks.
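
One way to run an ONNX-exported embedding model is through Hugging Face Optimum's ONNX Runtime integration. The sketch below assumes the repository id and that the repo's ONNX file loads via `ORTModelForFeatureExtraction`; the mean pooling shown is a simple choice for the sketch, so check the model card for the intended pooling.

```python
# A minimal sketch using Optimum's ONNX Runtime backend.
# Repository id and file layout are assumptions from this listing.
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "Qdrant/gte-large-onnx"  # assumed from the listing
tok = AutoTokenizer.from_pretrained(model_id)
model = ORTModelForFeatureExtraction.from_pretrained(model_id)

inputs = tok(["ONNX export of GTE-Large"], padding=True, return_tensors="pt")
outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # simple mean pooling
print(embedding.shape)
```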
## Gte Tiny

**Author:** TaylorAI · **Tags:** Text Embedding, Transformers · **Downloads:** 74.46k · **Likes:** 138

GTE Tiny is a small general-purpose text embedding model suitable for a variety of natural language processing tasks.
## Dragon Plus Context Encoder

**Author:** facebook · **Tags:** Text Embedding, Transformers · **Downloads:** 4,396 · **Likes:** 39

DRAGON+ is a BERT-based dense retrieval model with an asymmetric dual-encoder architecture, meaning queries and contexts are encoded by separate models; it is suited to text retrieval tasks.
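
The asymmetric dual-encoder design means a query goes through one encoder and each context through another, with relevance scored by dot product of the [CLS] embeddings. A minimal sketch, assuming the companion query-encoder repository id (`facebook/dragon-plus-query-encoder`):

```python
# A minimal sketch of DRAGON+'s asymmetric dual-encoder retrieval.
# The two repository ids are assumptions based on this listing.
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/dragon-plus-query-encoder")
q_enc = AutoModel.from_pretrained("facebook/dragon-plus-query-encoder")
c_enc = AutoModel.from_pretrained("facebook/dragon-plus-context-encoder")

query = "where was Marie Curie born?"
ctxs = ["Maria Sklodowska was born in Warsaw.", "Paris is the capital of France."]

# [CLS] embeddings from each encoder; dot product gives relevance scores.
q_emb = q_enc(**tok(query, return_tensors="pt")).last_hidden_state[:, 0, :]
c_inp = tok(ctxs, padding=True, truncation=True, return_tensors="pt")
c_emb = c_enc(**c_inp).last_hidden_state[:, 0, :]
print(q_emb @ c_emb.T)
```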
## Raw 2 No 1 Test 2 New.model

**Author:** Wheatley961 · **Tags:** Text Embedding, Transformers · **Downloads:** 13 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence-similarity and semantic-search tasks.
## Bertje Visio Retriever

**Author:** GeniusVoice · **Tags:** Text Embedding, Transformers · **Downloads:** 14 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence-similarity and semantic-search tasks.
## Ukhushn

**Author:** Ukhushn · **Tags:** Text Embedding, Transformers · **Downloads:** 35 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for clustering and semantic-search tasks.
## Healthcare 27.03.2021 27.03.2022 Redditflow

**Author:** NFflow · **Tags:** Text Embedding, Transformers · **Downloads:** 31 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence-similarity and semantic-search tasks.
## English Phrases Bible

**Author:** iamholmes · **License:** Apache-2.0 · **Tags:** Text Embedding, Transformers · **Downloads:** 28 · **Likes:** 0

A sentence-embedding model based on DistilBERT TAS-B, optimized for semantic search; it maps text into a 768-dimensional vector space.
## Nfcorpus Tsdae Msmarco Distilbert Gpl

**Author:** GPL · **Tags:** Text Embedding, Transformers · **Downloads:** 31 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence-similarity and semantic-search tasks.
## Climate Fever Tsdae Msmarco Distilbert Gpl

**Author:** GPL · **Tags:** Text Embedding, Transformers · **Downloads:** 31 · **Likes:** 0

A sentence-transformers embedding model that maps text into a 768-dimensional vector space, suitable for semantic-search and text-similarity tasks.
## Webis Touche2020 Distilbert Tas B Gpl Self Miner

**Author:** GPL · **Tags:** Text Embedding, Transformers · **Downloads:** 31 · **Likes:** 0

A sentence-transformers model that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for sentence-similarity and semantic-search tasks.
## Dpr Question Encoder Bert Uncased L 2 H 128 A 2

**Author:** nlpconnect · **License:** Apache-2.0 · **Tags:** Text Embedding, Transformers · **Downloads:** 21 · **Likes:** 0

A DPR question-encoder model based on a compact BERT architecture (2 layers, hidden size 128, 2 attention heads), for dense passage retrieval tasks.
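
transformers ships dedicated DPR classes; the sketch below assumes this repository loads through them (the repository id is reconstructed from the listing's name).

```python
# A minimal sketch with transformers' DPR question-encoder classes.
# The repository id is an assumption reconstructed from this listing.
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

model_id = "nlpconnect/dpr-question_encoder_bert_uncased_L-2_H-128_A-2"  # assumed
tok = DPRQuestionEncoderTokenizer.from_pretrained(model_id)
model = DPRQuestionEncoder.from_pretrained(model_id)

inputs = tok("who wrote the iliad?", return_tensors="pt")
q_vector = model(**inputs).pooler_output  # dense query embedding, here (1, 128)
print(q_vector.shape)
```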
## Distilbert Base Mean Pooling

**Author:** jgammack · **Tags:** Text Embedding, Transformers · **Downloads:** 2,340 · **Likes:** 5

A DistilBERT-based sentence embedding model that converts text into 768-dimensional vector representations via mean pooling, suitable for sentence-similarity and semantic-search tasks.
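
The pooling strategy in this model's name is the standard attention-mask-aware mean over token embeddings, which averages only real tokens and ignores padding. A minimal sketch with raw transformers (the model id is assumed from this entry):

```python
# A minimal sketch of attention-mask-aware mean pooling over token
# embeddings. The model id is an assumption based on this listing.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "jgammack/distilbert-base-mean-pooling"  # assumed from the listing
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tok(["mean pooling averages token vectors"], return_tensors="pt")
with torch.no_grad():
    token_embs = model(**inputs).last_hidden_state       # (1, seq_len, 768)

mask = inputs["attention_mask"].unsqueeze(-1).float()    # zero out padding
sentence_emb = (token_embs * mask).sum(1) / mask.sum(1)  # (1, 768)
print(sentence_emb.shape)
```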
## Msmarco MiniLM L6 En De V1

**Author:** cross-encoder · **License:** Apache-2.0 · **Tags:** Text Embedding, Transformers, Multilingual · **Downloads:** 2,784 · **Likes:** 12

A cross-lingual cross-encoder for English-German passage re-ranking, trained on the MS MARCO passage-ranking task.
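
Unlike the bi-encoders above, a cross-encoder reads the query and passage together and emits one relevance score per pair, which makes it a re-ranker rather than a first-stage retriever. A minimal sketch with sentence-transformers (the model id is assumed from this entry):

```python
# A minimal sketch of cross-encoder re-ranking with sentence-transformers.
# The model id is an assumption based on this listing.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/msmarco-MiniLM-L6-en-de-v1")  # assumed id
pairs = [
    ("how many people live in berlin?",
     "Berlin has about 3.7 million inhabitants."),
    ("how many people live in berlin?",
     "Berlin ist bekannt für seine Museen."),
]
scores = model.predict(pairs)  # one relevance score per (query, passage) pair
print(scores)
```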
## Contriever Msmarco

**Author:** facebook · **Tags:** Text Embedding, Transformers · **Downloads:** 24.08k · **Likes:** 27

A fine-tuned version of the pre-trained Contriever model, trained with contrastive learning and optimized for dense information retrieval tasks.
## Bioasq 1m Tsdae Msmarco Distilbert Gpl

**Author:** GPL · **Tags:** Text Embedding, Transformers · **Downloads:** 31 · **Likes:** 0

A sentence-transformers embedding model that converts text into 768-dimensional vector representations, suitable for semantic-search and text-similarity tasks.
## Mt5 Base Finetuned Tydiqa Xqa

**Author:** Narrativa · **Tags:** Question Answering System, Transformers, Other · **Downloads:** 368 · **Likes:** 6

A multilingual question-answering model based on Google's mT5-base and fine-tuned on the TyDi QA dataset, supporting question answering across the 101 languages covered by mT5.
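
As a generative QA model, it produces the answer as free text conditioned on the question and a context passage. The sketch below assumes the model id and the common `question: ... context: ...` input format used by many mT5 QA fine-tunes; check the model card for the exact prompt.

```python
# A minimal sketch of generative multilingual QA with a fine-tuned mT5.
# Model id and input format are assumptions based on this listing.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Narrativa/mT5-base-finetuned-tydiQA-xqa"  # assumed from the listing
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = ("question: When was the Eiffel Tower built? "
        "context: The Eiffel Tower was built from 1887 to 1889.")
out = model.generate(**tok(text, return_tensors="pt"), max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```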